Do We Need Discipline-Specific Academic Word Lists? Linguistics Academic Word List (LAWL)

Authors

Abstract:

This corpus-based study aimed to identify the most frequently used academic words in linguistics and to compare the resulting word list with the distribution of high-frequency words in Coxhead's Academic Word List (AWL) and West's General Service List (GSL), examining their coverage within the linguistics corpus. To this end, a Linguistics Research Article Corpus (LRAC) of 700 research articles, comprising approximately 4 million words from four main linguistics sub-disciplines (phonology, morphology, semantics, and syntax), was compiled and analyzed against two criteria: frequency and range. Based on the analysis, a list of 1,263 academic word families was produced to provide a useful linguistics academic word list for native and non-native English speakers. Results showed that AWL words account for 10.18% of the entire LRAC, and GSL words for 72.48%. Of the 570 word families in Coxhead's AWL, 381 (66.84%) met the word-selection criteria, supplying 29.88% of the word families in the Linguistics Academic Word List (LAWL). Furthermore, 224 word families that occurred frequently in the LRAC appear in neither the GSL nor the AWL; they account for 18.51% of the word families in the LAWL, with 5.07% coverage over the LRAC. Compared with the 2,000 word families of the GSL, 658 word families were identified. The results have pedagogical implications for linguistics and EAP practitioners, researchers, and materials designers.
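The selection procedure the abstract describes (keep a word only if it is both frequent overall and spread across the sub-disciplines, then report the list's coverage as a share of running words) can be sketched as follows. This is a minimal illustration, not the study's implementation: the toy sub-corpora, the `MIN_FREQ` and `MIN_RANGE` thresholds, and all variable names are invented, since the abstract does not state the actual cut-offs.

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus: one token list per sub-discipline
# (stand-ins for the phonology, morphology, semantics, and syntax sub-corpora).
sub_corpora = {
    "phonology":  "vowel segment feature analysis corpus segment".split(),
    "morphology": "affix stem feature analysis corpus analysis".split(),
    "semantics":  "meaning scope feature analysis corpus scope".split(),
    "syntax":     "clause phrase feature analysis corpus phrase".split(),
}

MIN_FREQ = 4   # assumed thresholds for illustration only;
MIN_RANGE = 3  # the study's real cut-offs are not given in the abstract

freq = Counter()                 # total occurrences of each word
ranges = defaultdict(set)        # sub-corpora in which each word occurs
for name, tokens in sub_corpora.items():
    for tok in tokens:
        freq[tok] += 1
        ranges[tok].add(name)

# Words meeting both the frequency and the range criterion form the list.
word_list = sorted(
    w for w in freq if freq[w] >= MIN_FREQ and len(ranges[w]) >= MIN_RANGE
)

# Coverage: share of all running words accounted for by the selected list.
total_tokens = sum(len(t) for t in sub_corpora.values())
covered = sum(freq[w] for w in word_list)
coverage = 100 * covered / total_tokens

print(word_list)
print(f"{coverage:.2f}% coverage")
```

In practice such counts are made over word families rather than raw tokens (inflected and derived forms are grouped under a headword), which is the unit the AWL, GSL, and LAWL figures above are reported in.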


Similar resources

A Corpus-driven Food Science and Technology Academic Word List

The overarching goal of this study was to create a list of the most frequently occurring academic words in Food Science and Technology (FST). To this end, a 4,652,444-word corpus called Food Science and Technology Research Articles (FSTRA), which included 1,421 research articles (RAs) randomly selected from 38 journals across five sub-disciplines in FST, was developed. Frequency and range-based...


Developing a Corpus-Based Word List in Pharmacy Research ‎Articles: A Focus on Academic Culture

The present corpus-based lexical study reports the development of a Pharmacy Academic Word List (PAWL): a list of the most frequent words from a corpus of 3,458,445 tokens made up of the 800 most recent pharmacy texts, including research articles, review articles, and short communications in four sub-disciplines of pharmacy. WordSmith (Scott, 2017) and AntWordProfiler (Anthony, 2014) were used to sc...


Material Development and English for Academic Purposes Word Lists; a Reductionist Approach

Nagy (1988) states that vocabulary is a prerequisite factor in comprehension. Drawing upon a reductionist approach, and with the prospects for material development in mind, this study aimed at creating an English for Academic Purposes Word List (EAPWL). The corpus of this study was compiled from texts comprising 6,479 pages, 2,081,678 tokens (running words), and 63,825 types (...


Do We Really Need the S-word?



Word Recognition: Do We Need Phonological Representations?

In what format(s) are spoken words memorized by the brain? Are word forms stored as abstract phonological representations? Or rather, are they stored as detailed acoustic-phonetic representations (for example, as a set of acoustic exemplars associated with each word)? We present a series of experiments whose results point to the existence of prelexical phonological processes in word recognit...


Do We Need Chinese Word Segmentation for Statistical Machine Translation?

In Chinese texts, words are not separated by white spaces. This is problematic for many natural language processing tasks. The standard approach is to segment the Chinese character sequence into words. Here, we investigate Chinese word segmentation for statistical machine translation. We pursue two goals: the first one is the maximization of the final translation quality; the second is the mini...




Journal title

Volume 35, Issue 3, pages 65–90

Publication date: 2016-11-21
